Asynchronous Newton-Raphson Consensus for Distributed Convex Optimization*
Authors
Abstract
Similar resources
Asynchronous Newton-Raphson Consensus for Robust Distributed Convex Optimization
A general trend in the development of distributed convex optimization procedures is to robustify existing algorithms so that they can tolerate the characteristics and conditions of communications among real devices. This manuscript follows this tendency by robustifying a promising distributed convex optimization procedure known as Newton-Raphson consensus. More specifically, we modi...
Asynchronous Newton-Raphson Consensus for Distributed Convex Optimization
We consider the distributed unconstrained minimization of separable convex cost functions, where the global cost is given by the sum of several local and private costs, each associated with a specific agent of a given communication network. We specifically address an asynchronous distributed optimization technique called Newton-Raphson Consensus. Besides having low computational complexity, low co...
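In symbols (notation added here for orientation, not taken from the paper), the separable problem these works address can be written as

\min_{x \in \mathbb{R}^d} \; f(x) = \sum_{i=1}^{N} f_i(x),

where each convex cost f_i is private to agent i and only neighbor-to-neighbor communication over the network is allowed.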
Newton-Raphson Consensus for Distributed Convex Optimization
We address the problem of distributed unconstrained convex optimization under separability assumptions, i.e., the framework where a network of agents, each endowed with a local private multidimensional convex cost and subject to communication constraints, wants to collaborate to compute the minimizer of the sum of the local costs. We propose a design methodology that combines average co...
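To illustrate the kind of scheme these abstracts describe, here is a minimal synchronous sketch of a Newton-Raphson-consensus-style iteration for scalar costs. It is an illustrative reconstruction under the assumption that agents run average consensus (with dynamic tracking) on the surrogates g_i = f_i''(x_i) x_i - f_i'(x_i) and h_i = f_i''(x_i) and step toward the ratio of the tracked averages; it is not the authors' exact algorithm, and all names are hypothetical.

import numpy as np

def newton_raphson_consensus(grads, hessians, P, x0, eps=0.1, iters=200):
    """Sketch of a synchronous, scalar Newton-Raphson-consensus-style loop.
    grads/hessians: lists of callables f_i' and f_i''; P: doubly stochastic
    consensus matrix; x0: initial local estimates, one per agent."""
    n = len(x0)
    x = np.array(x0, dtype=float)
    # local surrogates whose network averages define the Newton-like point
    g = np.array([hessians[i](x[i]) * x[i] - grads[i](x[i]) for i in range(n)])
    h = np.array([hessians[i](x[i]) for i in range(n)])
    y, z = g.copy(), h.copy()          # consensus variables tracking avg(g), avg(h)
    for _ in range(iters):
        x = (1 - eps) * x + eps * (y / z)   # move each estimate toward y_i / z_i
        g_new = np.array([hessians[i](x[i]) * x[i] - grads[i](x[i]) for i in range(n)])
        h_new = np.array([hessians[i](x[i]) for i in range(n)])
        # average consensus with dynamic tracking of the updated surrogates
        y = P @ (y + g_new - g)
        z = P @ (z + h_new - h)
        g, h = g_new, h_new
    return x

# Usage example: quadratic costs f_i(x) = a_i/2 * (x - b_i)^2 on 3 agents.
a, b = [1.0, 2.0, 3.0], [0.0, 1.0, 2.0]
grads = [lambda x, ai=a[i], bi=b[i]: ai * (x - bi) for i in range(3)]
hessians = [lambda x, ai=a[i]: ai for i in range(3)]
P = np.array([[0.5, 0.25, 0.25], [0.25, 0.5, 0.25], [0.25, 0.25, 0.5]])
print(newton_raphson_consensus(grads, hessians, P, [0.0, 0.0, 0.0]))
# every entry should approach the global minimizer sum(a_i*b_i)/sum(a_i) = 4/3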
Asynchronous Distributed ADMM for Consensus Optimization
Distributed optimization algorithms are highly attractive for solving big data problems. In particular, many machine learning problems can be formulated as the global consensus optimization problem, which can then be solved in a distributed manner by the alternating direction method of multipliers (ADMM) algorithm. However, this suffers from the straggler problem as its updates have to be synch...
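For orientation, the standard synchronous global-consensus ADMM iterations (in the scaled form popularized by Boyd et al.), which an asynchronous variant such as the one above modifies, read as follows (notation added here):

x_i^{k+1} = \arg\min_{x_i} \Big( f_i(x_i) + \tfrac{\rho}{2}\,\lVert x_i - z^k + u_i^k \rVert^2 \Big)
z^{k+1} = \tfrac{1}{N} \sum_{i=1}^{N} \big( x_i^{k+1} + u_i^k \big)
u_i^{k+1} = u_i^k + x_i^{k+1} - z^{k+1}

where z is the global consensus variable, u_i is the scaled dual variable of agent i, and ρ > 0 is the penalty parameter. In the synchronous version the z-update must wait for every agent, which is the straggler issue mentioned above.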
Distributed Newton Methods for Strictly Convex Consensus Optimization Problems in Multi-Agent Networks
Various distributed optimization methods have been developed for consensus optimization problems in multi-agent networks. Most of these methods use only gradient or subgradient information of the objective functions and therefore suffer from slow convergence rates. Recently, a distributed Newton method, whose appeal stems from the use of second-order information and its fast convergence rate, has been de...
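For context, the centralized Newton step that such second-order distributed methods aim to approximate over the network is (notation added here)

x^{k+1} = x^k - \Big( \sum_{i=1}^{N} \nabla^2 f_i(x^k) \Big)^{-1} \sum_{i=1}^{N} \nabla f_i(x^k),

with the aggregate Hessian and gradient accessible only through local computation and neighbor-to-neighbor exchanges.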
Journal
Journal title: IFAC Proceedings Volumes
Year: 2012
ISSN: 1474-6670
DOI: 10.3182/20120914-2-us-4030.00027